Analysis of Quickselect under Yaroslavskiy's Dual-Pivoting Algorithm
There is excitement within the algorithms community about a new partitioning
method introduced by Yaroslavskiy. This algorithm renders Quicksort slightly
faster than the case when it runs under classic partitioning methods. We show
that this improved performance in Quicksort is not sustained in Quickselect, a
variant of Quicksort for finding order statistics. We investigate the number of
comparisons made by Quickselect to find a key with a randomly selected rank
under Yaroslavskiy's algorithm. This grand averaging is a smoothing operator
over all individual distributions for specific fixed order statistics. We give
the exact grand average. The grand distribution of the number of comparisons
(when suitably scaled) is given as the fixed-point solution of a distributional
equation of a contraction in the Zolotarev metric space. Our investigation
shows that Quickselect under older partitioning methods slightly outperforms
Quickselect under Yaroslavskiy's algorithm, for an order statistic of a random
rank. Similar results are obtained for extremal order statistics, where again
we find the exact average, and the distribution for the number of comparisons
(when suitably scaled). Both limiting distributions are perpetuities (sums
of products of independent mixed continuous random variables).
Comment: full version with appendices; otherwise identical to the Algorithmica version.
Analysis of pivot sampling in dual-pivot Quicksort: A holistic analysis of Yaroslavskiy's partitioning scheme
The final publication is available at Springer via http://dx.doi.org/10.1007/s00453-015-0041-7
The new dual-pivot Quicksort by Vladimir Yaroslavskiy, used in Oracle's Java runtime library since version 7, features intriguing asymmetries. They make a basic variant of this algorithm use fewer comparisons than classic single-pivot Quicksort. In this paper, we extend the analysis to the case where the two pivots are chosen as fixed order statistics of a random sample. Surprisingly, dual-pivot Quicksort then needs more comparisons than a corresponding version of classic Quicksort, so it is clear that counting comparisons is not sufficient to explain the running-time advantages observed for Yaroslavskiy's algorithm in practice. Consequently, we take a more holistic approach and also give the precise leading term of the average number of swaps, the number of executed Java Bytecode instructions, and the number of scanned elements, a new simple cost measure that approximates I/O costs in the memory hierarchy. We determine optimal order statistics for each of the cost measures. It turns out that the asymmetries in Yaroslavskiy's algorithm render pivots with a systematic skew more efficient than the symmetric choice. Moreover, we finally have a convincing explanation for the success of Yaroslavskiy's algorithm in practice: compared with corresponding versions of classic single-pivot Quicksort, dual-pivot Quicksort needs significantly fewer I/Os, both with and without pivot sampling.
Peer Reviewed. Postprint (author's final draft).
Applying Length-Dependent Stochastic Context-Free Grammars to RNA Secondary Structure Prediction
Weinberg F, Nebel M. Applying Length-Dependent Stochastic Context-Free Grammars to RNA Secondary Structure Prediction. Algorithms. 2011;4(4):223--238.
Pivot Sampling in Dual-Pivot Quicksort
The new dual-pivot Quicksort by Vladimir Yaroslavskiy - used in Oracle's Java
runtime library since version 7 - features intriguing asymmetries in its
behavior. They were shown to cause a basic variant of this algorithm to use
fewer comparisons than classic single-pivot Quicksort implementations. In this
paper, we extend the analysis to the case where the two pivots are chosen as
fixed order statistics of a random sample and give the precise leading term of
the average number of comparisons, swaps and executed Java Bytecode
instructions. It turns out that - unlike for classic Quicksort, where it is
optimal to choose the pivot as median of the sample - the asymmetries in
Yaroslavskiy's algorithm render pivots with a systematic skew more efficient
than the symmetric choice. Moreover, the optimal skew heavily depends on the
employed cost measure; most strikingly, abstract costs like the number of swaps
and comparisons yield a very different result than counting Java Bytecode
instructions, which can be assumed most closely related to actual running time.
Comment: presented at AofA 2014 (http://www.aofa14.upmc.fr/)
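As an illustration of the sampling scheme analyzed here, the sketch below chooses the two pivots as fixed order statistics of a small random sample and moves them to the boundary positions where a dual-pivot partitioning step would expect them. The parameter names and defaults are ours, not the paper's; `ranks=(1, 3)` is the symmetric choice for a sample of five, and a skewed choice such as `ranks=(0, 2)` changes only the arguments.

```python
import random

def select_pivots_from_sample(a, lo, hi, sample_size=5, ranks=(1, 3)):
    """Place the ranks[0]-th and ranks[1]-th smallest elements of a random
    sample at positions lo and hi, ready for dual-pivot partitioning."""
    idx = random.sample(range(lo, hi + 1), sample_size)
    idx.sort(key=lambda i: a[i])          # order the sampled positions by value
    i1, i2 = idx[ranks[0]], idx[ranks[1]]
    a[hi], a[i2] = a[i2], a[hi]           # larger pivot to the right end
    if i1 == hi:                          # the smaller pivot was displaced
        i1 = i2
    a[lo], a[i1] = a[i1], a[lo]           # smaller pivot to the left end
```

After this step `a[lo] <= a[hi]` holds, so a partitioning routine can take `p = a[lo]` and `q = a[hi]` directly.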
Random generation of RNA secondary structures according to native distributions
Nebel M, Scheid A, Weinberg F. Random generation of RNA secondary structures according to native distributions. Algorithms for Molecular Biology. 2011;6(1):24.
Analysis of Branch Misses in Quicksort
The analysis of algorithms mostly relies on counting classic elementary
operations like additions, multiplications, comparisons, swaps etc. This
approach is often sufficient to quantify an algorithm's efficiency. In some
cases, however, features of modern processor architectures like pipelined
execution and memory hierarchies have significant impact on running time and
need to be taken into account to get a reliable picture. One such example is
Quicksort: It has been demonstrated experimentally that under certain
conditions on the hardware the classically optimal balanced choice of the pivot
as median of a sample becomes harmful. The reason lies in mispredicted branches
whose rollback costs become dominant.
In this paper, we give the first precise analytical investigation of the
influence of pipelining and the resulting branch mispredictions on the
efficiency of (classic) Quicksort and Yaroslavskiy's dual-pivot Quicksort as
implemented in Oracle's Java 7 library. For the latter, it is still not fully
understood why experiments show it to be 10% faster than a highly engineered
implementation of a classic single-pivot version. For different branch
prediction strategies, we give precise asymptotics for the expected number of
branch misses caused by the aforementioned Quicksort variants when their pivots
are chosen from a sample of the input. We conclude that the difference in
branch misses is too small to explain the superiority of the dual-pivot
algorithm.
Comment: to be presented at ANALCO 201
The Stack-Size of Combinatorial Tries Revisited
In the present paper we consider a generalized class of extended binary trees in which leaves are distinguished in order to represent the location of a key within a trie of the same structure. We prove an exact asymptotic equivalent to the average stack-size of trees with α internal nodes and β leaves corresponding to keys; we assume that all trees with the same parameters α and β have the same probability. The assumption of that uniform model is motivated for example by the usage of tries for the compression of blockcodes. Furthermore, we will prove asymptotics for the r-th moments of the stack-size and we will show that a normalized stack-size possesses a theta distribution in the limit
GeFaST: An improved method for OTU assignment by generalising Swarm’s fastidious clustering approach
Müller R, Nebel M. GeFaST: An improved method for OTU assignment by generalising Swarm’s fastidious clustering approach. BMC Bioinformatics. 2018;19(1): 321.**Background**: Massive genomic data sets from high-throughput sequencing allow for new insights into complex biological systems such as microbial communities. Analyses of their diversity and structure are typically preceded by clustering millions of 16S rRNA gene sequences into OTUs. Swarm introduced a new clustering strategy which addresses important conceptual and performance issues of the popular de novo clustering approach. However, some parts of the new strategy, e.g. the fastidious option for increased clustering quality, come with their own restrictions.
**Results:** In this paper, we present the new exact, alignment-based de novo clustering tool GeFaST, which implements a generalisation of Swarm’s fastidious clustering. Our tool extends the fastidious option to arbitrary clustering thresholds and allows to adjust its greediness. GeFaST was evaluated on mock-community and natural data and achieved higher clustering quality and performance for small to medium clustering thresholds compared to Swarm and other de novo tools. Clustering with GeFaST was between 6 and 197 times as fast as with Swarm, while the latter required up to 38% less memory for non-fastidious clustering but at least three times as much memory for fastidious clustering.
**Conclusions:** GeFaST extends the scope of Swarm’s clustering strategy by generalising its fastidious option, thereby allowing for gains in clustering quality, and by increasing its performance (especially in the fastidious case). Our evaluations showed that GeFaST has the potential to leverage the use of the (fastidious) clustering strategy for higher thresholds and on larger data sets
- …